Fuzzy audio similarity measures based on spectrum histograms and fluctuation patterns
Spectrum histograms and fluctuation patterns are representations of audio fragments. By comparing these representations, we can determine the similarity between the corresponding fragments. Traditionally, this is done using the Euclidean distance. We propose fuzzy similarity measures as an alternative. First we introduce some well-known fuzzy similarity measures, together with certain properties that can be desirable or useful in practice. In particular we present several forms of restrictability, which make it possible to reduce the computation time in practical applications. Next, we show that fuzzy similarity measures can be used to compare spectrum histograms and fluctuation patterns. Finally, we describe some experimental observations for this fuzzy approach to constructing audio similarity measures.
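As an illustration of the kind of fuzzy similarity measure discussed above, the following sketch compares two normalized histograms with the fuzzy Jaccard index, treating bin values as membership degrees. The function name and the choice of min/max operators are illustrative assumptions, not the paper's exact definitions:

```python
import numpy as np

def fuzzy_jaccard(a: np.ndarray, b: np.ndarray) -> float:
    """Fuzzy Jaccard similarity between two histograms.

    Each histogram is treated as a fuzzy set: bin values are membership
    degrees in [0, 1]. Intersection is the element-wise minimum, union
    the element-wise maximum; the measure is their ratio of sums.
    """
    union = np.maximum(a, b).sum()
    if union == 0.0:
        return 1.0  # two empty histograms are considered identical
    return float(np.minimum(a, b).sum() / union)

# Normalize spectrum histograms to [0, 1] before comparing.
h1 = np.array([0.2, 0.8, 0.5, 0.0])
h2 = np.array([0.1, 0.9, 0.5, 0.2])
print(fuzzy_jaccard(h1, h2))  # ≈ 0.778 (min-sum 1.4 / max-sum 1.8)
```

Unlike the Euclidean distance, this measure is bounded in [0, 1] and depends only on bins where at least one histogram has mass, which is what makes restrictions to sub-histograms computationally attractive.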
Education alignment
This essay reviews recent developments in embedding data management and curation skills into information technology, library and information science, and research-based postgraduate courses in various national contexts. The essay also investigates means of joining up formal education with professional development training opportunities more coherently. The potential for using professional internships as a means of improving communication and understanding between disciplines is also explored. A key aim of this essay is to identify what level of complementarity is needed across various disciplines to most effectively and efficiently support the entire data curation lifecycle.
Technical alignment
This essay discusses the importance of infrastructure and testing in helping digital preservation services demonstrate reliability, transparency, and accountability. It encourages practitioners to build a strong culture in which transparency and collaboration between technical frameworks are valued highly. It also argues for devising and applying agreed-upon metrics that will enable the systematic analysis of preservation infrastructure. The essay begins by defining technical infrastructure and testing in the digital preservation context, provides case studies that exemplify both progress and challenges for technical alignment in both areas, and concludes with suggestions for achieving greater degrees of technical alignment going forward.
Issues in digital preservation: towards a new research agenda
Digital preservation has evolved into a specialized, interdisciplinary research discipline of its own, seeing significant increases in research capacity and results, but also in challenges. However, with this specialization and the subsequent formation of a dedicated subgroup of researchers active in this field, limitations in the range of challenges addressed can be observed. Digital preservation research can appear reactive, fixing problems that exist now rather than proactively researching new solutions that may become applicable only after a few years of maturing. Recognising the benefits of bringing together researchers and practitioners with various professional backgrounds related to digital preservation, a seminar was organized at Schloss Dagstuhl, the Leibniz Center for Informatics (18-23 July 2010), with the aim of addressing current digital preservation challenges, with a specific focus on automation aspects in this field. The main goal of the seminar was to outline research challenges in digital preservation, providing a number of "research questions" that could be tackled immediately, e.g. in doctoral theses. The seminar also intended to highlight the need for the digital preservation community to reach out to IT research and other research communities outside the immediate digital preservation domain, in order to jointly develop solutions.
Testing supervised classifiers based on non-negative matrix factorization to musical instrument classification
In this paper, a class of algorithms for the automatic classification of individual musical instrument sounds is presented. Two feature sets were employed: the first containing perceptual features and MPEG-7 descriptors, and the second containing rhythm patterns developed for the SOMeJB project. The features were measured for 300 sound recordings covering 6 different musical instrument classes. Subsets of the feature set are selected using branch-and-bound search, obtaining the most suitable features for classification. A class of supervised classifiers is developed based on non-negative matrix factorization (NMF). The standard NMF method is examined, as well as its modifications: the local and the sparse NMF. The experiments compare the two feature sets alongside the various NMF algorithms. The results demonstrate almost perfect classification for the first set using the standard NMF algorithm (classification error of 1.0%), outperforming the state-of-the-art techniques tested in the same experiment.
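The reconstruction-based use of NMF for supervised classification can be sketched as follows: learn one nonnegative basis per class via multiplicative updates, then assign a test vector to the class whose basis reconstructs it with the smallest error. This is a minimal stand-in in plain NumPy, assuming the standard Lee-Seung updates; it is not the authors' exact algorithm or feature pipeline:

```python
import numpy as np

def nmf(V, r, n_iter=200, seed=0):
    """Basic NMF via multiplicative updates: V ≈ W @ H, all nonnegative.
    V has shape (features, samples); W is the learned basis."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + 0.1
    H = rng.random((r, n)) + 0.1
    eps = 1e-9
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

def encode(W, v, n_iter=200):
    """Project a single nonnegative vector v onto a fixed basis W."""
    eps = 1e-9
    h = np.full((W.shape[1], 1), 0.5)
    v = v.reshape(-1, 1)
    for _ in range(n_iter):
        h *= (W.T @ v) / (W.T @ W @ h + eps)
    return h

def classify(bases, v):
    """Assign v to the class whose NMF basis reconstructs it best."""
    errors = {c: np.linalg.norm(v.reshape(-1, 1) - W @ encode(W, v))
              for c, W in bases.items()}
    return min(errors, key=errors.get)
```

A training set per class is factored once (`nmf`), and at test time only the small encoding problem is solved per class; the class with the lowest reconstruction error wins.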
Reinforcement Learning in Sparse-Reward Environments with Hindsight Policy Gradients
A reinforcement learning agent that needs to pursue different goals across episodes requires a goal-conditional policy. In addition to their potential to generalize desirable behavior to unseen goals, such policies may also enable higher-level planning based on subgoals. In sparse-reward environments, the capacity to exploit information about the degree to which an arbitrary goal has been achieved while another goal was intended appears crucial to enabling sample-efficient learning. However, reinforcement learning agents have only recently been endowed with such capacity for hindsight. In this letter, we demonstrate how hindsight can be introduced to policy gradient methods, generalizing this idea to a broad class of successful algorithms. Our experiments on a diverse selection of sparse-reward environments show that hindsight leads to a remarkable increase in sample efficiency.
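The core hindsight idea, exploiting which goal was actually achieved while another was intended, can be sketched as a goal-relabeling step over a recorded trajectory. The `Step` structure and the "relabel with the final achieved goal" strategy below are illustrative assumptions, not the letter's hindsight policy-gradient estimator:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Step:
    state: tuple
    action: int
    goal: tuple      # goal the agent intended to reach
    achieved: tuple  # goal actually achieved in this state

def sparse_reward(step: Step) -> float:
    """Sparse reward: 1 only when the achieved goal matches the intent."""
    return 1.0 if step.achieved == step.goal else 0.0

def hindsight_relabel(trajectory: List[Step]) -> List[Step]:
    """Relabel a failed trajectory as if the finally achieved goal had
    been intended all along. The otherwise uninformative zero-reward
    episode then contains at least one successful transition, which a
    goal-conditional learner can exploit."""
    new_goal = trajectory[-1].achieved
    return [Step(s.state, s.action, new_goal, s.achieved) for s in trajectory]
```

Under the original goal every reward in a failed episode is zero; after relabeling, the final step earns a reward, turning the episode into a useful learning signal.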
Recurrent Neural-Linear Posterior Sampling for Nonstationary Contextual Bandits
An agent in a nonstationary contextual bandit problem should balance between exploration and the exploitation of (periodic or structured) patterns present in its previous experiences. Handcrafting an appropriate historical context is an attractive way to transform a nonstationary problem into a stationary problem that can be solved efficiently. However, even a carefully designed historical context may introduce spurious relationships or lack a convenient representation of crucial information. In order to address these issues, we propose an approach that learns to represent the relevant context for a decision based solely on the raw history of interactions between the agent and the environment. This approach relies on a combination of features extracted by recurrent neural networks with a contextual linear bandit algorithm based on posterior sampling. Our experiments on a diverse selection of contextual and noncontextual nonstationary problems show that our recurrent approach consistently outperforms its feedforward counterpart, which requires handcrafted historical contexts, while being more widely applicable than conventional nonstationary bandit algorithms. Although it is very difficult to provide theoretical performance guarantees for our new approach, we also prove a novel regret bound for linear posterior sampling with measurement error that may serve as a foundation for future theoretical work.
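The linear posterior-sampling component described above can be sketched as Bayesian linear regression with Thompson sampling over action features. In the recurrent approach the feature vector would be produced by an RNN over the raw interaction history; here the features are fixed, hypothetical inputs, and the class name is our own:

```python
import numpy as np

class LinearPosteriorSampling:
    """Minimal linear Thompson (posterior) sampling sketch.

    Maintains a Gaussian posterior over a shared parameter vector theta.
    Each round, a theta is sampled from the posterior and the action
    whose feature vector scores highest under the sample is chosen."""

    def __init__(self, dim, noise_var=0.25, prior_var=1.0):
        self.precision = np.eye(dim) / prior_var  # posterior precision
        self.b = np.zeros(dim)                    # precision-weighted mean
        self.noise_var = noise_var

    def choose(self, action_features, rng):
        cov = np.linalg.inv(self.precision)
        mean = cov @ self.b
        theta = rng.multivariate_normal(mean, cov)  # posterior sample
        return int(np.argmax(action_features @ theta))

    def update(self, phi, reward):
        # Standard Bayesian linear-regression update for one observation.
        self.precision += np.outer(phi, phi) / self.noise_var
        self.b += phi * reward / self.noise_var
```

Because exploration comes from posterior sampling rather than an explicit bonus, swapping the fixed `action_features` for learned recurrent features leaves the bandit algorithm itself unchanged.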